Section: Partnerships and Cooperations

European Initiatives

FP7 & H2020 Projects

EARS
  • Title: Embodied Audition for RobotS

  • Program: FP7

  • Duration: January 2014 - December 2016

  • Coordinator: Friedrich-Alexander-Universität Erlangen-Nürnberg

  • Partners:

    • Aldebaran Robotics (France)

    • Ben-Gurion University of the Negev (Israel)

    • Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)

    • Imperial College London (United Kingdom)

    • Humboldt-Universität zu Berlin (Germany)

  • Inria contact: Radu Horaud

  • The success of future natural, intuitive human-robot interaction (HRI) will critically depend on how responsive the robot is to all forms of human expression and how well it is aware of its environment. With acoustic signals distinctively characterizing physical environments, and speech being the most effective means of communication among humans, truly humanoid robots must be able to fully extract the rich auditory information from their environment and to use voice communication as much as humans do. While vision-based HRI is well developed, current limitations in robot audition do not allow for such effective, natural acoustic human-robot communication in real-world environments, mainly because of the severe degradation of the desired acoustic signals by noise, interference and reverberation when captured by the robot's microphones. To overcome these limitations, EARS will provide intelligent 'ears' with close-to-human auditory capabilities and use them for HRI in complex real-world environments. Novel microphone arrays and powerful signal-processing algorithms shall be able to localize and track multiple sound sources of interest and to extract and recognize the desired signals. After fusion with robot vision, embodied robot cognition will then derive HRI actions and knowledge of the entire scenario, and feed this back to the acoustic interface for further auditory scene analysis. As a prototypical application, EARS will consider a welcoming robot in a hotel lobby, a scenario that offers all of the above challenges. Representing a large class of generic applications, this scenario is of key interest to industry; a leading European robot manufacturer will therefore integrate EARS's results into a robot platform for the consumer market and validate it. In addition, the provision of open-source software and an advisory board with key players from the relevant robot industry should help make EARS a turnkey project for promoting audition in the robotics world.
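
To illustrate the kind of sound-source localization task mentioned in the EARS abstract (and not the specific algorithms developed in the project), the following minimal Python sketch estimates the time difference of arrival between two microphone channels with the classical GCC-PHAT method; the sampling rate, signal lengths and the simulated 5-sample delay are assumptions made for the example.

import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    # Generic textbook GCC-PHAT time-delay estimator (illustrative only).
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    # Phase transform: keep only phase information, which is more robust to reverberation.
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Rearrange so that index max_shift corresponds to zero lag, then pick the peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)  # estimated delay in seconds

# Toy usage: the second channel lags the first by 5 samples (~0.31 ms at 16 kHz).
fs = 16000
x = np.random.randn(fs)
y = np.roll(x, 5)
print(gcc_phat(y, x, fs))

In a multi-microphone array, such pairwise delay estimates can be combined geometrically to infer source directions, which is one standard building block behind localization and tracking pipelines of the kind the project targets.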

VHIA
  • Title: Vision and Hearing in Action

  • Program: FP7

  • Type: ERC

  • Duration: February 2014 - January 2019

  • Coordinator: Inria

  • Inria contact: Radu Horaud

  • The objective of VHIA is to elaborate a holistic computational paradigm of perception and of perception-action loops. We plan to develop a completely novel twofold approach: (i) learn from mappings between auditory/visual inputs and structured outputs, and from sensorimotor contingencies, and (ii) execute perception-action interaction cycles in the real world with a humanoid robot. VHIA will achieve a unique fine coupling between methodological findings and proof-of-concept implementations using the consumer humanoid NAO manufactured in Europe. The proposed multimodal approach is in strong contrast with current computational paradigms influenced by unimodal biological theories. These theories have hypothesized a modular view, postulating quasi-independent and parallel perceptual pathways in the brain. VHIA will also take a radically different view from today's audiovisual fusion models, which rely on clean-speech signals and on accurate frontal images of faces; these models assume that videos and sounds are recorded with hand-held or head-mounted sensors, and hence that there is a human in the loop who intentionally supervises perception and interaction. Our approach deeply contradicts the belief that complex and expensive humanoids (often manufactured in Japan) are required to implement research ideas. VHIA's methodological program addresses extremely difficult issues: how to build a joint audiovisual space from heterogeneous, noisy, ambiguous and physically different visual and auditory stimuli, how to model seamless interaction, how to deal with high-dimensional input data, and how to achieve robust and efficient human-humanoid communication tasks through a well-thought-out tradeoff between offline training and online execution. VHIA bets on the high-risk idea that in the next decades social robots will have a considerable economic impact, and that there will be millions of humanoids in our homes, schools and offices that will be able to communicate naturally with us.
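
To make the idea of learning mappings between audiovisual inputs and structured outputs concrete, here is a purely illustrative Python sketch of early (feature-level) fusion with a linear least-squares regressor. The feature dimensions, the random toy data and the 2-D speaker-position target are assumptions for the example and do not correspond to the models developed in VHIA.

import numpy as np

# Hypothetical toy data: per-frame audio features (e.g., binaural cues) and
# visual features (e.g., face-detector outputs), jointly observed with 2-D
# speaker positions used as the structured output.
rng = np.random.default_rng(0)
n_frames = 500
audio_feat = rng.normal(size=(n_frames, 8))
visual_feat = rng.normal(size=(n_frames, 16))
positions = rng.normal(size=(n_frames, 2))  # target output

# Early fusion: concatenate the two modalities into a joint feature space,
# then learn a linear input-to-output mapping by least squares.
X = np.hstack([audio_feat, visual_feat, np.ones((n_frames, 1))])  # bias term
W, *_ = np.linalg.lstsq(X, positions, rcond=None)

# Predict the position associated with a new audiovisual observation.
x_new = np.hstack([rng.normal(size=8), rng.normal(size=16), 1.0])
print(x_new @ W)

Richer models replace the linear map with probabilistic or deep regressors and handle missing or asynchronous modalities, but the fused input-to-structured-output view sketched here is the common starting point.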

Inria International Partners

Informal International Partners
  • Professor Sharon Gannot, Bar-Ilan University, Tel Aviv, Israel,

  • Professor Yoav Schechner, Technion, Haifa, Israel,

  • Dr. Miles Hansard, Queen Mary University of London,

  • Dr. Thomas Hueber, GIPSA-lab, CNRS, Grenoble,

  • Professor Daniel Gatica-Perez, Idiap Research Institute, Martigny, Switzerland,

  • Professor Nicu Sebe, University of Trento, Trento, Italy,

  • Professor Adrian Raftery, University of Washington, Seattle, USA,

  • Dr. Zhengyou Zhang, Microsoft, Redmond, WA, USA.